    Default Estimation, Correlated Defaults, and Expert Information

    Capital allocation decisions are made on the basis of an assessment of creditworthiness. Default is a rare event for most segments of a bank's portfolio and data information can be minimal. Inference about default rates is essential for efficient capital allocation, for risk management and for compliance with the requirements of the Basel II rules on capital standards for banks. Expert information is crucial in inference about defaults. A Bayesian approach is proposed and illustrated using prior distributions assessed from industry experts. A maximum entropy approach is used to represent expert information. The binomial model, most common in applications, is extended to allow correlated defaults yet remain consistent with Basel II. The application shows that probabilistic information can be elicited from experts and econometric methods can be useful even when data information is sparse.
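
    As a hedged illustration of the approach, the sketch below builds a maximum-entropy prior on a segment's default probability from a single elicited quantity (an expert's assessed mean default rate; both the mean and the default counts are hypothetical) and updates it with binomial data on a grid. The paper's actual elicitation and correlated-default extension are richer than this.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical inputs: an expert's assessed mean default rate for one
# portfolio segment, and d observed defaults among n obligors.
expert_mean = 0.01
d, n = 2, 400

theta = np.linspace(1e-6, 1 - 1e-6, 20001)  # grid on (0, 1)

def me_prior(lam):
    """Maximum-entropy density on [0,1] with a mean constraint:
    f(theta) proportional to exp(lam * theta) (a truncated exponential)."""
    x = lam * theta
    w = np.exp(x - x.max())  # subtract max for numerical stability
    return w / w.sum()

# Choose the multiplier lam so the prior mean matches the expert's value.
lam = brentq(lambda l: (theta * me_prior(l)).sum() - expert_mean, -2000, 2000)
prior = me_prior(lam)

# Binomial likelihood on the log scale, then the grid posterior.
loglik = d * np.log(theta) + (n - d) * np.log1p(-theta)
post = prior * np.exp(loglik - loglik.max())
post /= post.sum()

print(f"posterior mean default rate: {(theta * post).sum():.4f}")
```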

    Heteroskedasticity-Autocorrelation Robust Standard Errors Using the Bartlett Kernel without Truncation

    In this paper we analyze heteroskedasticity-autocorrelation (HAC) robust tests constructed using the Bartlett kernel without truncation. We show that while such an HAC estimator is not consistent, asymptotically valid testing is still possible. We show that tests using the Bartlett kernel without truncation are exactly equivalent to recent HAC robust tests proposed by Kiefer, Vogelsang and Bunzel (2000, Econometrica, 68, pp 695-714).
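
    A minimal numpy sketch of the estimator in question, assuming a scalar series whose mean is being tested: the Bartlett-kernel long-run variance with the bandwidth set to the full sample size, so no autocovariance is truncated. The AR(1) data are illustrative; critical values for the resulting statistic come from the nonstandard Kiefer-Vogelsang-Bunzel limit rather than the normal distribution.

```python
import numpy as np

def bartlett_hac_no_truncation(v):
    """Bartlett-kernel long-run variance with bandwidth M = T (no
    truncation): gamma_0 + 2 * sum_{j=1}^{T-1} (1 - j/T) * gamma_j."""
    v = np.asarray(v, dtype=float)
    T = v.size
    u = v - v.mean()
    omega = (u @ u) / T  # gamma_0
    for j in range(1, T):
        omega += 2.0 * (1.0 - j / T) * (u[j:] @ u[:-j]) / T
    return omega

# Illustrative use: a t-type statistic for the mean of an AR(1) series.
rng = np.random.default_rng(0)
T = 200
y = np.empty(T)
y[0] = rng.standard_normal()
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()

omega = bartlett_hac_no_truncation(y)
t_stat = np.sqrt(T) * y.mean() / np.sqrt(omega)
# Not asymptotically N(0,1): compare with the nonstandard critical
# values tabulated by Kiefer, Vogelsang and Bunzel (2000).
print(f"t-statistic with untruncated Bartlett HAC: {t_stat:.3f}")
```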

    The Maximum Entropy Distribution for Stochastically Ordered Random Variables with Fixed Marginals

    Stochastically ordered random variables with given marginal distributions are combined into a joint distribution preserving the ordering and the marginals using a maximum entropy formulation. A closed-form expression is obtained. An application is default estimation for different portfolio segments, where priors on the individual default probabilities are available and the stochastic ordering is agreed on by separate experts. The ME formulation allows an efficiency improvement over separate analyses.
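
    The paper's closed form is not reproduced here. As one hedged reading of the setup, suppose both variables take values on a common grid x_1 < ... < x_m with marginal pmfs f and g, and "preserving the ordering" is taken to mean P(X <= Y) = 1; the maximum entropy problem is then

\[
\begin{aligned}
\max_{p \ge 0}\quad & -\sum_{i,j} p_{ij}\,\log p_{ij} \\
\text{s.t.}\quad & \sum_{j} p_{ij} = f_i \;\; (i = 1,\dots,m), \qquad
\sum_{i} p_{ij} = g_j \;\; (j = 1,\dots,m), \\
& p_{ij} = 0 \;\; \text{whenever } i > j .
\end{aligned}
\]

    Because the constraints are linear in p, the maximizer takes the product form p_{ij} = a_i b_j on the allowed support {i <= j}, with a and b pinned down by the marginal constraints; this is consistent with, though not a substitute for, the paper's closed-form expression.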

    Default Estimation for Low-Default Portfolios

    The problem in default probability estimation for low-default portfolios is that there is little relevant historical data information. No amount of data processing can fix this problem. More information is required. Incorporating expert opinion formally is an attractive option.

    The Smooth Colonel Meets the Reverend

    Kernel smoothing techniques have attracted much attention and some notoriety in recent years. The attention is well deserved as kernel methods free researchers from having to impose rigid parametric structure on their data. The notoriety arises from the fact that the amount of smoothing (i.e., local averaging) that is appropriate for the problem at hand is under the control of the researcher. In this paper we provide a deeper understanding of kernel smoothing methods for discrete data by leveraging the unexplored links between hierarchical Bayes models and kernel methods for discrete processes. A number of potentially useful results are thereby obtained, including bounds on when kernel smoothing can be expected to dominate non-smooth (e.g., parametric) approaches in mean squared error and suggestions for thinking about the appropriate amount of smoothing.
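
    To make the setting concrete, here is a sketch using the Aitchison-Aitken kernel for an unordered discrete variable (an assumption; the paper's exact estimator may differ). The smoothed pmf is a convex combination of the raw frequencies and the uniform distribution, the same shrinkage form as a posterior mean under a symmetric hierarchical prior, which is the kind of link the paper exploits; the sample and true pmf below are made up.

```python
import numpy as np

def aa_smoothed_pmf(x, c, lam):
    """Aitchison-Aitken kernel estimate of a pmf with c unordered cells:
    K(x, X_i) = 1 - lam if x == X_i, else lam / (c - 1). Equivalently a
    convex combination of the raw frequencies and the uniform pmf 1/c."""
    freq = np.bincount(x, minlength=c) / x.size
    return (1.0 - lam) * freq + lam * (1.0 - freq) / (c - 1)

rng = np.random.default_rng(1)
c = 5
p_true = np.array([0.55, 0.20, 0.10, 0.10, 0.05])
x = rng.choice(c, size=30, p=p_true)  # small sample, where smoothing helps

for lam in (0.0, 0.1, 0.3):  # lam = 0 is the unsmoothed frequency estimate
    err = ((aa_smoothed_pmf(x, c, lam) - p_true) ** 2).sum()
    print(f"lam = {lam:.1f}: squared error vs true pmf = {err:.4f}")
```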

    Evidence of non-Markovian behavior in the process of bank rating migrations

    This paper estimates transition matrices for the ratings on financial institutions, using an unusually informative data set. We show that the process of rating migration exhibits significant non-Markovian behavior, in the sense that the transition intensities are affected by macroeconomic and bank-specific variables. We illustrate how the use of a continuous-time framework may improve the estimation of the transition probabilities. However, the time-homogeneity assumption, frequently made in economic applications, does not hold, even for short time intervals. Thus, the information provided by migrations alone is not enough to forecast the future behavior of ratings. The stage of the business cycle should be taken into account, and individual characteristics of banks must be considered as well.
    Keywords: financial institutions; macroeconomic variables; capitalization; supervision; transition intensities. JEL classification: C4; E44; G21; G23; G38.
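
    A sketch of the continuous-time machinery the abstract refers to, with hypothetical counts: the maximum-likelihood generator has off-diagonal intensities Q_ij = N_ij / R_i (migrations observed over total time spent in rating i), and under time homogeneity the t-period transition matrix is the matrix exponential exp(Qt).

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical data for 3 rating classes: N[i, j] counts observed i -> j
# migrations, R[i] is the total time (in years) spent in rating i.
N = np.array([[0.0, 20.0,  2.0],
              [15.0, 0.0, 10.0],
              [1.0,  8.0,  0.0]])
R = np.array([400.0, 300.0, 150.0])

# MLE of the generator: off-diagonal intensities N_ij / R_i, with the
# diagonal set so that each row sums to zero.
Q = N / R[:, None]
np.fill_diagonal(Q, -Q.sum(axis=1))

# Under time homogeneity, the t-year transition matrix is expm(Q * t).
print("1-year transition probabilities:")
print(np.round(expm(Q * 1.0), 4))
# If the intensities in fact move with macro and bank-specific variables,
# a single pooled Q misstates these probabilities; the intensities would
# then need to be modeled as functions of those covariates.
```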

    A Simulation Estimator for Testing the Time Homogeneity of Credit Rating Transition

    The measurement of credit quality is at the heart of the models designed to assess the reserves and capital needed to support the risks of both individual credits and portfolios of credit instruments. A popular specification for credit-rating transitions is the simple, time-homogeneous Markov model. While the Markov specification cannot really describe processes in the long run, it may be useful for adequately describing short-run changes in portfolio risk. In this specification, the entire stochastic process can be characterized in terms of estimated transition probabilities. However, the simple homogeneous Markovian transition framework is restrictive. We propose a test of the null hypothesis of time homogeneity that can be performed on the sorts of data often reported. We apply the test to four data sets: commercial paper, sovereign debt, municipal bonds, and S&P corporates. The results indicate that commercial paper looks Markovian on a 30-day time scale for up to 6 months; sovereign debt also looks Markovian (perhaps due to a small sample size); municipals are well modeled by the Markov specification for up to 5 years, though they could probably benefit from frequent updating of the estimated transition matrix or from more sophisticated modeling; and S&P corporate ratings are approximately Markov over 3 transitions but not 4.
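
    This is not the paper's simulation estimator, but a sketch of the restriction such a test checks: under a time-homogeneous Markov chain the directly estimated two-period matrix must equal the square of the one-period matrix, so their distance is evidence against the null. The matrices below are hypothetical.

```python
import numpy as np

# Hypothetical estimated transition matrices from rating histories.
P1 = np.array([[0.90, 0.08, 0.02],   # one-period transitions
               [0.10, 0.80, 0.10],
               [0.02, 0.08, 0.90]])
P2 = np.array([[0.83, 0.13, 0.04],   # directly estimated two-period transitions
               [0.16, 0.68, 0.16],
               [0.05, 0.13, 0.82]])

# Under time homogeneity, P2 should equal P1 @ P1 up to sampling error.
gap = np.abs(P2 - P1 @ P1).max()
print(f"max |P_hat(2) - P_hat(1)^2| = {gap:.4f}")
# A formal test compares this gap with its distribution under the null,
# which the paper obtains by simulation.
```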

    Robust Model Selection in Dynamic Models with an Application to Comparing Predictive Accuracy

    A model selection procedure based on a general criterion function, with an example of the Kullback-Leibler Information Criterion (KLIC) using quasi-likelihood functions, is considered for dynamic non-nested models. We propose a robust test which generalizes Lien and Vuong's (1987) test with a heteroskedasticity/autocorrelation consistent (HAC) variance estimator. We use the fixed-b asymptotics developed in Kiefer and Vogelsang (2005) to improve the asymptotic approximation to the sampling distribution of the test statistic. The fixed-b approach is compared with a bootstrap method and the standard normal approximation in Monte Carlo simulations. The fixed-b asymptotics and the bootstrap method are found to be markedly superior to the standard normal approximation. An empirical application to foreign exchange rate forecasting models is presented.
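
    A schematic of the statistic being studied, assuming the per-period quasi-log-likelihood differences d_t between the two models are already computed: the numerator is their sample mean and the denominator a Bartlett-kernel HAC standard error whose bandwidth is a fixed fraction b of the sample size, which is what fixed-b asymptotics holds constant.

```python
import numpy as np

def fixed_b_klic_stat(d, b=0.5):
    """Vuong-type statistic sqrt(T) * mean(d) / HAC se, with a Bartlett
    bandwidth M = b * T held at a fixed fraction of the sample size.
    Under fixed-b asymptotics the null distribution is nonstandard, so
    critical values come from Kiefer and Vogelsang (2005), not N(0,1)."""
    d = np.asarray(d, dtype=float)
    T = d.size
    u = d - d.mean()
    M = max(1, int(b * T))
    omega = (u @ u) / T
    for j in range(1, M):
        omega += 2.0 * (1.0 - j / M) * (u[j:] @ u[:-j]) / T
    return np.sqrt(T) * d.mean() / np.sqrt(omega)

# Illustrative use with simulated log-likelihood differences d_t.
rng = np.random.default_rng(2)
d = 0.05 + 0.5 * rng.standard_normal(300)
print(f"fixed-b statistic: {fixed_b_klic_stat(d):.3f}")
```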

    Specification and Informational Issues in Credit Scoring

    Lenders use rating and scoring models to rank credit applicants on their expected performance. The models and approaches are numerous. We explore the possibility that estimates generated by models developed with data drawn solely from extended loans are less valuable than they should be because of selectivity bias. We investigate the value of "reject inference"--methods that use a rejected applicant's characteristics, rather than loan performance data, in scoring model development. In the course of making this investigation, we also discuss the advantages of using parametric as well as nonparametric modeling. These issues are discussed and illustrated in the context of a simple stylized model.
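
    A toy simulation of the selectivity problem the abstract describes (all numbers hypothetical): acceptance depended partly on soft information that also drives default, so a scorecard refit on extended loans alone recovers distorted coefficients relative to a fit on all applicants.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 50_000
x1 = rng.standard_normal(n)   # hard information, used in the accept rule
x2 = rng.standard_normal(n)   # hard information, not used in the rule
u = rng.standard_normal(n)    # soft information, unobserved by the modeler

# True default process: higher x1, x2, u all lower the default risk.
p_default = 1.0 / (1.0 + np.exp(2.0 + x1 + 0.5 * x2 + u))
default = rng.random(n) < p_default

# Loans were extended when hard plus soft information looked good enough.
accepted = (x1 + u) > -0.5

X = np.column_stack([x1, x2])
full = LogisticRegression(C=1e6, max_iter=1000).fit(X, default)
acc = LogisticRegression(C=1e6, max_iter=1000).fit(X[accepted], default[accepted])

print("coefficients, all applicants:", np.round(full.coef_[0], 2))
print("coefficients, accepted only: ", np.round(acc.coef_[0], 2))
# Because acceptance selected on u, defaults among accepted loans are not
# representative given (x1, x2), typically attenuating the x1 coefficient;
# reject inference uses the rejected applicants' characteristics to try
# to correct for this.
```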

    Geometry of the Log-Likelihood Ratio Statistic in Misspecified Models

    We show that the asymptotic mean of the log-likelihood ratio in a misspecified model is a differential geometric quantity that is related to the exponential curvature of Efron (1978), Amari (1982), and the preferred point geometry of Critchley et al. (1993, 1994). The mean is invariant with respect to reparametrization, which leads to the differential geometrical approach where coordinate-system invariant quantities like statistical curvatures play an important role. When models are misspecified, the likelihood ratios do not have the usual chi-squared asymptotic limit, and the asymptotic mean of the likelihood ratio depends on two geometric factors: the departure of the models from exponential families (i.e., the exponential curvature) and the departure of the embedding spaces from being totally flat in the sense of Critchley et al. (1994). As a special case, the mean reduces to the mean of the usual chi-squared limit (i.e., half of the degrees of freedom) when these two curvatures vanish. The effect of the curvatures is shown in the non-nested hypothesis testing approach of Vuong (1989), and we correct the numerator of the test statistic with an estimated asymptotic mean of the log-likelihood ratio to improve the asymptotic approximation to the sampling distribution of the test statistic.
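
    Schematically, in the notation of a Vuong-type comparison, the proposed correction recenters the numerator:

\[
T_n \;=\; \frac{LR_n - \widehat{m}_n}{\sqrt{n}\,\widehat{\omega}_n},
\]

    where LR_n is the sample log-likelihood ratio, \(\widehat{\omega}_n\) estimates its asymptotic standard deviation as in Vuong (1989), and \(\widehat{m}_n\) estimates the asymptotic mean, which by the result above is built from the two curvature terms and reduces to half the degrees of freedom when both vanish.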